Sparse Wide-Baseline Virtual View Generation

Authors

  • Engin Tola
  • Pascal Fua
Abstract

Given a number of images of a scene, Virtual View Generation (VVG) is the process of generating the images that would have been seen from viewpoints other than those of the input cameras. As shown in Fig. 1, the movie industry already uses VVG for visual-effect generation in films such as “The Matrix.” It could also allow a director to modify a scene by changing the viewpoint without the trouble and expense of reshooting it. Similarly, it is a necessary component of free-viewpoint TV, which would allow a spectator to watch a show from any desired viewpoint.

In many of today’s VVG systems, the approach is to shoot the scene with a very large number of cameras and to generate video along a predefined trajectory that cannot depart much from the positions of the input camera array. Image synthesis then mostly involves interpolation or re-sampling [7, 5].

In this thesis, we develop an approach to VVG that uses far fewer cameras than is the norm today. This makes VVG much more practical and flexible: it requires less effort to synchronize and calibrate the cameras, and it opens VVG to a wider range of applications, such as surveillance systems, where installing a large number of cameras is impractical and expensive. However, achieving this involves overcoming several severe difficulties. Since we do not have a dense sampling of the imaging space, methods based on interpolation and re-sampling are not suitable. Additional cues, such as depth, must therefore be incorporated into the formulation. Once the depth is known, synthesizing new images is not very difficult. Our case, however, involves dense wide-baseline matching and the handling of large occlusions arising from regions that only one camera can see. These are problematic issues because:
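The abstract's claim that "once the depth is known, it is not very difficult to synthesize new images" rests on standard pinhole-camera reprojection: each source pixel is back-projected to a 3D point using its depth, then projected into the virtual camera. The sketch below illustrates that geometry; it is a minimal illustration, not the thesis's method, and the function name, argument layout, and pose convention (virtual camera expressed relative to the source camera) are our own assumptions.

```python
import numpy as np

def reproject(depth, K_src, K_virt, R, t):
    """Warp source pixels into a virtual view, given per-pixel depth.

    depth  : (H, W) known depth of each source pixel
    K_src  : (3, 3) source camera intrinsics
    K_virt : (3, 3) virtual camera intrinsics
    R, t   : rotation (3, 3) and translation (3,) of the virtual
             camera frame relative to the source camera frame
    Returns (H, W, 2) pixel coordinates in the virtual view.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)   # homogeneous pixel coords
    rays = pix @ np.linalg.inv(K_src).T                # back-project to viewing rays
    pts3d = rays * depth[..., None]                    # scale rays by depth -> 3D points
    pts_v = pts3d @ R.T + t                            # move into the virtual camera frame
    proj = pts_v @ K_virt.T                            # project with virtual intrinsics
    return proj[..., :2] / proj[..., 2:3]              # perspective divide

# Sanity check: with an identity pose, the virtual view coincides
# with the source view, so pixels map onto themselves.
depth = np.ones((4, 4))
coords = reproject(depth, np.eye(3), np.eye(3), np.eye(3), np.zeros(3))
```

Note that this forward warp says nothing about the hard parts the abstract identifies: pixels whose 3D point is occluded in the virtual view, and regions visible to only one camera, for which no source pixel exists at all.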


Similar Resources

Wide-Baseline Multi-View Video Segmentation For 3D Reconstruction

Obtaining a foreground silhouette across multiple views is one of the fundamental steps in 3D reconstruction. In this paper we present a novel video segmentation approach, to obtain a foreground silhouette, for scenes captured by a wide-baseline camera rig given a sparse manual interaction in a single view. The algorithm is based on trimap propagation, a framework used in video matting. Bayesia...


Object virtual viewing using adaptive tri-view morphing

This paper proposes a new technique for generating an arbitrary virtual view of an object of interest given a set of images taken from around that object. The algorithm extends Xiao and Shah’s tri-view morphing scheme to work with wide baseline imagery. Our method performs feature detection and feature matching across three views then blends the real views into a virtual view. Tri-view morphing...


Augmented Reality in a Wide Area Sentient Environment

Augmented Reality (AR) both exposes and supplements the user’s view of the real world. Previous AR work has focussed on the close registration of real and virtual objects, which requires very accurate real-time estimates of head position and orientation. Most of these systems have been tethered and restricted to small volumes. In contrast, we have chosen to concentrate on allowing the AR user t...


Articulated 3-d Modelling in a Wide-baseline Disparity Space

Image-based novel-view synthesis requires dense correspondences between the original views to produce a high quality synthetic view. In a wide-baseline stereo setup, dense correspondences are difficult to achieve due to the significant change in viewpoint giving rise to a number of problems. To improve their quality, the original, incomplete disparity maps are usually interpolated to fill in th...


Appearance-based virtual view generation from multicamera videos captured in the 3-D room

We present an appearance-based virtual view generation method that allows viewers to fly through a real dynamic scene. The scene is captured by multiple synchronized cameras. Arbitrary views are generated by interpolating two original camera-views near the given viewpoint. The quality of the generated synthetic view is determined by the precision, consistency and density of correspondences betw...



Journal:

Volume   Issue 

Pages  -

Publication date: 2007